Do AI bots like ChatGPT threaten humanity if left unchecked?

This file photo taken on April 30, 2023 shows a ChatGPT screen. (Mainichi)

TAKASAKI, Gunma (Kyodo) -- Like nuclear weapons and biotechnology before it, artificial intelligence has, according to some experts, brought the world to an existential crossroads: humanity's future could be at risk if proper checks are not put in place on a global scale.

Against this backdrop, AI models like ChatGPT were high on the agenda during a two-day Group of Seven Digital and Tech Ministers' Meeting in Japan that concluded April 30, with policymakers agreeing on the urgent need for continued discussion on how to govern the rapidly advancing technology.

In its joint declaration, the G-7 agreed to promote "responsible" use of AI while calling for broader stakeholder participation in developing international standards for governance.

ChatGPT -- short for Chat Generative Pre-trained Transformer -- was launched as a prototype in November 2022. It is trained on massive amounts of text data, enabling it to process prompts and simulate human-like conversations with users.
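
In practice, such a conversation happens through an API call. The snippet below is a minimal sketch using OpenAI's official Python client; the model name, prompts and setup are illustrative assumptions, not details reported in this article.

```python
# Minimal sketch of a ChatGPT-style exchange via OpenAI's Python client.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in
# the environment; the model and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the GPT-3.5 family discussed in the article
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in one sentence how you generate replies."},
    ],
)
print(response.choices[0].message.content)
```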

Running on the GPT-3.5 model at its initial release, the chatbot took the world by storm for its enormous number of parameters -- the adjustable variables it uses to generate text -- said to be 355 billion, a huge leap from earlier language models that typically had only a few million.
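
To put that scale in perspective, here is a back-of-the-envelope calculation that takes the article's figures at face value; the "few million" baseline and the 2-bytes-per-parameter (16-bit) storage assumption are illustrative.

```python
# Rough scale comparison, taking the article's parameter figures at face value.
params_gpt35 = 355e9   # parameter count cited for GPT-3.5
params_early = 5e6     # "a few million" parameters in earlier models (assumed)

print(f"scale-up factor: about {params_gpt35 / params_early:,.0f}x")  # ~71,000x

bytes_per_param = 2    # 16-bit precision, a common storage assumption
print(f"weights alone: about {params_gpt35 * bytes_per_param / 1e9:,.0f} GB")  # ~710 GB
```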

On March 14, U.S.-based developer OpenAI released the next iteration of the model, known as GPT-4, which is more powerful than its predecessors and has multimodal capabilities -- meaning it can take both text and images as prompts.
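
In API terms, "multimodal" means a single request can mix text and images. The sketch below assumes OpenAI's vision-capable chat endpoint; the model name and image URL are placeholders, not details from the article.

```python
# Sketch of a multimodal prompt: text and an image in one request.
# Model name and image URL are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",  # a vision-capable GPT-4 variant (assumed)
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this photo?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```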

While the seemingly limitless potential of generative AI has raised concerns that technological development may be getting out of hand, expert opinion is divided on whether it signals doom for humankind.

In March, the Future of Life Institute, a think tank focusing on the responsible development and use of technology, published an open letter calling for a minimum six-month pause in the training of AI systems more powerful than GPT-4.

Citing dangers such as the perpetuation of bias, the spread of misinformation, the destabilization of labor markets and the concentration of power in a small number of corporations in an accompanying paper of policy recommendations, the letter had collected over 27,000 signatures as of April 30, including from Tesla's Elon Musk, who co-founded OpenAI, and Apple Inc. co-founder Steve Wozniak.

Advanced AI systems "could themselves pursue goals, either human- or self-assigned, in ways that place negligible value on human rights, human safety, or, in the most harrowing scenarios, human existence," the think tank wrote.

Musk, who is one of the institute's backers, also sounded the alarm on hyper-intelligent AI in an April interview with Fox News, saying it "has the potential of civilization destruction."

"(If) we only put a regulation after something terrible has happened, it may be too late to actually put the regulations in place. The AI may be in control at that point," he said.

Katja Grace, co-founder and lead researcher of AI Impacts, a project that focuses on the long-term consequences of sophisticated AI, said she estimates there is a 19 percent chance humanity's failure to control AI will result in humankind's extinction.

"I think the biggest risk is that current progress leads soon to AI systems that are as good at making decisions about everything as current AI systems are at making decisions about Go or chess, and are strategizing to bring about objectives that are contrary to human welfare, leading to the destruction of humanity," she said.

The systems are still far from perfect, however. Inaccurate information presented as fact -- a phenomenon dubbed "hallucination" -- remains a challenge for large language models, making them unreliable for critical applications.
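
One way to see why hallucinations rule out critical applications is to imagine a naive guard that accepts a model's answer only if it can be matched against a trusted reference. The toy check below is purely illustrative -- real fact verification is far harder than string matching.

```python
# Toy illustration: accept an answer only if every sentence appears
# verbatim in a trusted reference text. Real verification is much harder.
def is_grounded(answer: str, reference: str) -> bool:
    for sentence in answer.lower().split("."):
        sentence = sentence.strip()
        if sentence and sentence not in reference.lower():
            return False
    return True

reference = "GPT-4 was released by OpenAI on March 14, 2023."
print(is_grounded("GPT-4 was released by OpenAI on March 14, 2023.", reference))  # True
print(is_grounded("GPT-4 was released in 2021.", reference))  # False -- hallucinated
```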

Satoshi Kurihara, chairman of the Ethics Committee of the Japanese Society for Artificial Intelligence, said AI currently only exists as a tool for humans, and "it is humans who will destroy humankind."

"I believe we can avoid extinction if we can learn to coexist with the highly autonomous and versatile AI that will likely become a reality in the future," he said during a recent written interview.

Kurihara stressed that the development of such advanced systems must adhere to principles like peace, cultural diversity and integrity, and that the scope and transparency of AI use must be controlled.

In the joint declaration issued this weekend, the G-7 said it recognized "the need to take stock in the near term of the opportunities and challenges" of generative AI and "continue promoting safety and trust" given its global prominence and fast-paced development.

Less apocalyptic but more immediate concerns surrounding generative AI models revolve around their unauthorized collection of user data, ability to manipulate public opinion and potential use for nefarious purposes like deepfakes and revenge pornography.

A repository managed by AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC), an initiative that tracks the unethical use of AI, has also logged several incidents of fake news and disinformation during Russia's invasion of Ukraine.

But Charlie Pownall, founder of AIAAIC, said that while it is important to curb such abuses, they "may prove difficult to regulate proportionately without unduly abusing user privacy, confidentiality, and other rights."

International regulation of AI is further complicated by differing attitudes toward technology around the globe.

Japan's emphasis on generative AI's potential utility, for example, means the government has so far taken a more cautious stance toward regulation than the European Union, which has proposed what it describes as the first-ever legal framework on AI.

"It seems unlikely that China, the U.S., EU, the United Kingdom and other major markets will be on the same page on many important aspects of AI legislation given political, economic and legal differences and the wide divergence in public perception and expectations of AI across different countries and cultures," Pownall said.
